    Towards endowing collaborative robots with fast learning for minimizing tutors’ demonstrations: what and when to do?

    Programming by demonstration allows non-experts in robot programming to train robots in an intuitive manner. However, this learning paradigm requires multiple demonstrations of the same task, which can be time-consuming and annoying for the human tutor. To overcome this limitation, we propose a fast learning system, based on neural dynamics, that permits collaborative robots to memorize sequential information from single task demonstrations by a human tutor. Importantly, the learning system memorizes not only long sequences of sub-goals in a task but also the time intervals between them. We implement this learning system in Sawyer (a collaborative robot from Rethink Robotics) and test it in a construction task, where the robot observes several human tutors with different preferences on the sequential order in which to perform the task and different behavioral time scales. After learning, memory recall (of what to do and when to do a sub-task) allows the robot to instruct inexperienced human workers in a particular human-centered task scenario. POFC - Programa Operacional Temático Factores de Competitividade (POCI-01-0247-FEDER-024541)
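    The abstract does not give the model equations, so the following is only a minimal sketch of the idea of one-shot sequence memory, assuming a common dynamic-field scheme in which earlier sub-goals are stored with higher sustained activation and inter-sub-goal intervals are stored as graded levels; all names and numbers are illustrative, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): one-shot storage of a
# demonstrated sub-goal sequence as a graded "activation gradient", where
# earlier sub-goals receive higher activation, plus a per-step value whose
# level encodes the observed time interval before that sub-goal.
def memorize(demo):
    """demo: list of (sub_goal_name, interval_seconds) from one demonstration."""
    n = len(demo)
    order_memory = {}      # sub-goal -> activation (higher = earlier)
    interval_memory = {}   # sub-goal -> stored interval (graded amplitude)
    for rank, (goal, interval) in enumerate(demo):
        order_memory[goal] = 1.0 - rank / n      # activation gradient over rank
        interval_memory[goal] = interval
    return order_memory, interval_memory

def recall(order_memory, interval_memory):
    """Read out 'what and when': pick the most active item, then suppress it."""
    activations = dict(order_memory)
    sequence = []
    while activations:
        goal = max(activations, key=activations.get)
        sequence.append((goal, interval_memory[goal]))
        del activations[goal]                    # inhibition of return
    return sequence

if __name__ == "__main__":
    demo = [("place base", 2.0), ("insert column", 3.5), ("attach top", 1.5)]
    print(recall(*memorize(demo)))
```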

    Combining spatial and parametric working memory in a dynamic neural field model

    We present a novel dynamic neural field model consisting of two coupled fields of Amari type which supports the existence of localized activity patterns or “bumps” with a continuum of amplitudes. Bump solutions have been used in the past to model spatial working memory. We apply the model to explain input-specific persistent activity that increases monotonically with the time integral of the input (parametric working memory). In numerical simulations of a multi-item memory task, we show that the model robustly memorizes the strength and/or duration of inputs. Moreover, and importantly for adaptive behavior in dynamic environments, the memory strength can be changed at any time by new behaviorally relevant information. A direct comparison of model behaviors shows that the two-field model does not suffer the problems of the classical Amari model when the inputs are presented sequentially as opposed to simultaneously.
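    As a point of reference, a generic pair of coupled Amari-type field equations has the following form (illustrative notation only; the paper's exact couplings, kernels and parameters are not given in the abstract):

```latex
% u(x,t), v(x,t): coupled activity fields; w_*: lateral-interaction kernels;
% f: firing-rate nonlinearity; I(x,t): external input; h_*: resting levels.
\begin{align}
\tau_u\,\partial_t u(x,t) &= -u(x,t) + \int w_{uu}(x-x')\,f\big(u(x',t)\big)\,dx'
  + \int w_{uv}(x-x')\,f\big(v(x',t)\big)\,dx' + I(x,t) + h_u,\\
\tau_v\,\partial_t v(x,t) &= -v(x,t) + \int w_{vu}(x-x')\,f\big(u(x',t)\big)\,dx' + h_v.
\end{align}
```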

    Goals and means in action observation: a computational approach

    Many of our daily activities are supported by behavioural goals that guide the selection of actions, which allow us to reach these goals effectively. Goals are considered to be important for action observation since they allow the observer to copy the goal of the action without the need to use the exact same means. The importance of being able to use different action means becomes evident when the observer and observed actor have different bodies (robots and humans) or bodily measurements (parents and children), or when the environments of actor and observer differ substantially (when an obstacle is present or absent in either environment). A selective focus on the action goals instead of the action means furthermore circumvents the need to consider the vantage point of the actor, which is consistent with recent findings that people prefer to represent the actions of others from their own individual perspective. In this paper, we use a computational approach to investigate how knowledge about action goals and means is used in action observation. We hypothesise that in action observation human agents are primarily interested in identifying the goals of the observed actor’s behaviour. Behavioural cues (e.g. the way an object is grasped) may help to disambiguate the goal of the actor (e.g. whether a cup is grasped for drinking or for handing it over). Recent advances in cognitive neuroscience are cited in support of the model’s architecture.

    Feature extraction using Poincaré plots for gait classification

    The aim of this study is to evaluate different features, extracted from a Poincaré plot of gait signals, in their ability to classify the gait of patients with neurodegenerative diseases: Parkinson’s disease (PD) and Huntington’s disease (HD). Five different features that describe gait variability were extracted from the Poincaré plots of two gait signals: stride time and percentage of stride time spent in swing phase. Among the set of extracted features, those that displayed significant differences between the two groups and were not correlated with each other were used as input to the support vector machine classifier. It was found that all extracted features (with the exception of one feature in the PD vs healthy group comparison) are significantly different between healthy and pathological subjects and are suitable to discriminate them (with accuracies greater than 80%). When comparing PD vs HD, just three features were significantly different; however, a relatively good classification accuracy (around 72%) was achieved using two of them. The results demonstrate that it is feasible to apply variability measures extracted from Poincaré plots of gait data signals in gait classification problems.
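    The abstract does not name the five Poincaré-plot features, so the sketch below uses the standard short-term/long-term descriptors SD1 and SD2 as stand-ins, computed from a stride-time series and fed to a support vector machine; the data, labels and parameters are purely illustrative.

```python
# Illustrative sketch (the paper's exact five features are not listed in the
# abstract): standard Poincaré-plot descriptors SD1/SD2 computed from a stride
# time series, used as input to a support vector machine classifier.
import numpy as np
from sklearn.svm import SVC

def poincare_sd1_sd2(series):
    """Short-term (SD1) and long-term (SD2) variability of the plot of x[n+1] vs x[n]."""
    x, y = np.asarray(series[:-1]), np.asarray(series[1:])
    sd1 = np.sqrt(np.var(y - x) / 2.0)   # dispersion perpendicular to the identity line
    sd2 = np.sqrt(np.var(y + x) / 2.0)   # dispersion along the identity line
    return sd1, sd2

# Hypothetical data: one stride-time series per subject, with a group label.
rng = np.random.default_rng(0)
series_list = [rng.normal(1.1, 0.05 + 0.05 * label, 200)
               for label in (0, 1) for _ in range(20)]
labels = [label for label in (0, 1) for _ in range(20)]

features = np.array([poincare_sd1_sd2(s) for s in series_list])
clf = SVC(kernel="rbf").fit(features, labels)
print("training accuracy:", clf.score(features, labels))
```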

    Neural field model for measuring and reproducing time intervals

    The continuous real-time motor interaction with our environment requires the capacity to measure and produce time intervals in a highly flexible manner. Recent neurophysiological evidence suggests that the neural computational principles supporting this capacity may be understood from a dynamical systems perspective: inputs and initial conditions determine how a recurrent neural network evolves from a “resting state” to a state triggering the action. Here we test this hypothesis in a time measurement and time reproduction experiment using a model of a robust neural integrator based on the theoretical framework of dynamic neural fields. During measurement, the temporal accumulation of input leads to the evolution of a self-stabilized bump whose amplitude reflects elapsed time. During production, the stored information is used to reproduce the time interval on a trial-by-trial basis, either by adjusting the input strength or the initial condition of the integrator. We discuss the impact of the results on our goal to endow autonomous robots with a human-like temporal cognition capacity for natural human-robot interactions. The work received financial support from FCT through the PhD fellowship PD/BD/128183/2016, the project “Neurofield” (POCI-01-0145-FEDER-031393) and the research centre CMAT within the project UID/MAT/00013/2013.
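    As a rough illustration of the measurement stage only (kernel, nonlinearity and all parameters are assumptions, not taken from the paper), the following Euler simulation drives an Amari-type field with a localized input and reads out the maximal activation reached at input offset:

```python
# Schematic Euler integration of an Amari-type field under a localized input;
# the maximal activation at input offset is read out as the quantity that, in
# the paper's regime, would encode elapsed time. All parameters are illustrative.
import numpy as np

L, N, dt, tau = 10.0, 201, 0.01, 1.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

def kernel(d, a_exc=2.0, s_exc=0.6, a_inh=1.0, s_inh=1.5):
    """Mexican-hat lateral interaction (local excitation, broader inhibition)."""
    return a_exc * np.exp(-d**2 / (2 * s_exc**2)) - a_inh * np.exp(-d**2 / (2 * s_inh**2))

W = kernel(x[:, None] - x[None, :])
f = lambda u: 1.0 / (1.0 + np.exp(-5.0 * (u - 0.5)))   # firing-rate nonlinearity
h = -1.0                                                # resting level

def simulate(u, steps, inp):
    for _ in range(steps):
        u = u + (dt / tau) * (-u + h + dx * (W @ f(u)) + inp)
    return u

inp = 2.0 * np.exp(-x**2 / 0.5)                        # localized input at the center
u_end = simulate(np.full(N, h), steps=300, inp=inp)    # 3 time units of stimulation
print("maximal field activation at input offset:", round(float(u_end.max()), 2))
```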

    A dynamic neural field approach to natural and efficient human-robot collaboration

    A major challenge in modern robotics is the design of autonomous robots that are able to cooperate with people in their daily tasks in a human-like way. We address the challenge of natural human-robot interactions by using the theoretical framework of dynamic neural fields (DNFs) to develop processing architectures that are based on neuro-cognitive mechanisms supporting human joint action. By explaining the emergence of self-stabilized activity in neuronal populations, dynamic field theory provides a systematic way to endow a robot with crucial cognitive functions such as working memory, prediction and decision making. The DNF architecture for joint action is organized as a large-scale network of reciprocally connected neuronal populations that encode in their firing patterns specific motor behaviors, action goals, contextual cues and shared task knowledge. Ultimately, it implements a context-dependent mapping from observed actions of the human onto adequate complementary behaviors that takes into account the inferred goal of the co-actor. We present results of flexible and fluent human-robot cooperation in a task in which the team has to assemble a toy object from its components. The present research was conducted in the context of the fp6-IST2 EU-IP Project JAST (proj. nr. 003747) and partly financed by the FCT grants POCI/V.5/A0119/2005 and CONC-REEQ/17/2001. We would like to thank Luis Louro, Emanuel Sousa, Flora Ferreira, Eliana Costa e Silva, Rui Silva and Toni Machado for their assistance during the robotic experiment.

    Universal neural field computation

    Turing machines and Gödel numbers are important pillars of the theory of computation. Thus, any computational architecture needs to show how it could relate to Turing machines and how stable implementations of Turing computation are possible. In this chapter, we implement universal Turing computation in a neural field environment. To this end, we employ the canonical symbologram representation of a Turing machine obtained from a Gödel encoding of its symbolic repertoire and generalized shifts. The resulting nonlinear dynamical automaton (NDA) is a piecewise affine-linear map acting on the unit square that is partitioned into rectangular domains. Instead of looking at point dynamics in phase space, we then consider functional dynamics of probability distribution functions (p.d.f.s) over phase space. This is generally described by a Frobenius-Perron integral transformation that can be regarded as a neural field equation over the unit square as feature space of a dynamic field theory (DFT). Solving the Frobenius-Perron equation yields that uniform p.d.f.s with rectangular support are again mapped onto uniform p.d.f.s with rectangular support. We call the resulting representation a dynamic field automaton. Comment: 21 pages; 6 figures. arXiv admin note: text overlap with arXiv:1204.546
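    For orientation, the generic Frobenius-Perron transfer equation for a map acting on the unit square reads as follows (standard form; the chapter's own notation may differ):

```latex
% rho_t: probability density over the unit square X at step t;
% \Phi: the piecewise affine-linear NDA map; \delta: Dirac delta.
\begin{equation}
\rho_{t+1}(x) \;=\; \int_{X} \delta\big(x - \Phi(y)\big)\,\rho_t(y)\,\mathrm{d}y .
\end{equation}
```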

    A neural integrator model for planning and value-based decision making of a robotics assistant

    Modern manufacturing and assembly environments are characterized by a high variability in the build process, which challenges human–robot cooperation. To reduce the cognitive workload of the operator, the robot should not only be able to learn from experience but also to plan and decide autonomously. Here, we present an approach based on Dynamic Neural Fields that applies brain-like computations to endow a robot with these cognitive functions. A neural integrator is used to model the gradual accumulation of sensory and other evidence as time-varying persistent activity of neural populations. The decision to act is modeled by a competitive dynamics between neural populations linked to different motor behaviors. They receive the persistent activation pattern of the integrators as input. In the first experiment, a robot learns rapidly by observation the sequential order of object transfers between an assistant and an operator to subsequently substitute the assistant in the joint task. The results show that the robot is able to proactively plan the series of handovers in the correct order. In the second experiment, a mobile robot searches at two different workbenches for a specific object to deliver it to an operator. The object may appear at the two locations in a certain time period with independent probabilities unknown to the robot. The trial-by-trial decision under uncertainty is biased by the accumulated evidence of past successes and choices. The choice behavior over a longer period reveals that the robot achieves a high search efficiency in stationary as well as dynamic environments. The work received financial support from FCT through the PhD fellowships PD/BD/128183/2016 and SFRH/BD/124912/2016, the project “Neurofield” (PTDC/MAT-APL/31393/2017) and the research centre CMAT within the project UID/MAT/00013/2013.
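    The abstract does not spell out the decision equations, so the following is a minimal sketch of the competitive read-out only, assuming two mutually inhibiting decision nodes driven by accumulator levels; all names, parameters and the example data are illustrative, not the authors' model.

```python
# Minimal sketch (illustrative): two decision nodes with mutual inhibition
# receive biases that stand in for the persistent activation accumulated from
# past successes; the first node to cross threshold selects the workbench.
import numpy as np

def decide(bias_a, bias_b, dt=0.01, tau=0.5, inhibition=2.0, threshold=1.0,
           noise=0.05, rng=np.random.default_rng(1)):
    """bias_a/bias_b: accumulated evidence favouring workbench A or B."""
    u = np.zeros(2)                                   # decision nodes A, B
    drive = np.array([bias_a, bias_b])
    for _ in range(10_000):
        cross = inhibition * u[::-1]                  # mutual inhibition
        du = (-u + drive - cross) / tau
        u = np.maximum(u + dt * du + noise * np.sqrt(dt) * rng.standard_normal(2), 0)
        if u.max() >= threshold:
            return "A" if u.argmax() == 0 else "B"
    return "no decision"

# Example: past successes favour workbench A.
choices = [decide(bias_a=1.3, bias_b=0.9) for _ in range(20)]
print(choices.count("A"), "choices of A out of", len(choices))
```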

    Forward displacements of fading objects in motion: the role of transient signals in perceiving position

    Visual motion causes mislocalisation phenomena in a variety of experimental paradigms. For many displays, objects are perceived as displaced 'forward' in the direction of motion. However, in some cases involving the abrupt stopping or reversal of motion the forward displacements are not observed. We propose that the transient neural signals at the offset of a moving object play a crucial role in accurate localisation. In the present study, we eliminated the transient signals at motion offset by gradually reducing the luminance of the moving object. Our results show that the 'disappearance threshold' for a moving object is lower than the detection threshold for the same object without a motion history. In units of time, this manipulation led to a forward displacement of the disappearance point by 175 ms. We propose an explanation of our results in terms of two processes: forward displacements are caused by internal models predicting the positions of moving objects; the usually observed correct localisation of stopping positions, however, is based on transient inputs that retroactively attenuate errors that the internal models might otherwise cause. Both processes are geared to reducing localisation errors for moving objects.

    Interferences in the Transformation of Reference Frames during a Posture Imitation Task

    We present a biologically inspired neural model addressing the problem of transformations across frames of reference in a posture imitation task. Our modeling is based on the hypothesis that imitation is mediated by two concurrent transformations selectively sensitive to spatial and anatomical cues. In contrast to classical approaches, we also assume that separate instances of this pair of transformations are responsible for the control of each side of the body. We also devised an experimental paradigm which allowed us to model the interference patterns caused by the interaction between the anatomical imitative strategy on the one hand and the spatial one on the other. The results from our simulation studies thus provide predictions of real behavioral responses.
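    To make the two reference-frame transformations concrete, here is a toy illustration only; the limb names, postures and the face-to-face configuration are assumptions of this sketch, not the paper's model.

```python
# Toy illustration: two transformations of a demonstrated arm posture, an
# "anatomical" one that keeps the same limb and an egocentric "spatial" one
# that mirrors left/right when imitator and demonstrator face each other.
def anatomical_map(demonstrated):
    """Same limb, same posture (demonstrator's left -> imitator's left)."""
    return dict(demonstrated)

def spatial_map(demonstrated):
    """Match the position in the imitator's visual space: left/right are mirrored."""
    mirror = {"left": "right", "right": "left"}
    return {mirror[limb]: posture for limb, posture in demonstrated.items()}

demo = {"left": "arm raised", "right": "arm lowered"}
print("anatomical:", anatomical_map(demo))
print("spatial:   ", spatial_map(demo))
# When the two mappings prescribe different limbs for the same posture, their
# concurrent activation is the kind of interference the experiments probe.
```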